Last Update: 2025/3/26
# Anthropic Completions API
The Anthropic Completions API allows you to generate text completions using Anthropic's language models. This document provides an overview of the API endpoints, request parameters, and response structure.
## Endpoint

```
POST https://platform.llmprovider.ai/v1/complete
```
## Request Headers

| Header | Value |
| --- | --- |
| x-api-key | YOUR_API_KEY |
| anthropic-version | 2023-06-01 |
| Content-Type | application/json |
## Request Body
The request body should be a JSON object with the following parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| max_tokens_to_sample | integer | The maximum number of tokens to generate before stopping. Required range: x > 1 |
| model | string | The model that will complete your prompt (e.g., claude-2.1). |
| prompt | string | The prompt to complete. |
| metadata | object | (Optional) An object describing metadata about the request. |
| stop_sequences | string[] | (Optional) Sequences that will cause the model to stop generating. |
| stream | boolean | (Optional) Whether to stream the response using server-sent events. |
| temperature | number | (Optional) Amount of randomness injected into the response. Required range: 0 < x < 1 |
| top_k | number | (Optional) Only sample from the top K options. Required range: x > 0 |
| top_p | number | (Optional) Use nucleus sampling. Required range: 0 < x < 1 |
| max_tokens | integer | (Optional) The maximum number of tokens to generate. |
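When `stream` is set to `true`, the response arrives as server-sent events rather than a single JSON body. The sketch below is illustrative, not authoritative: it assumes each event is delivered as a `data: {...}` line whose JSON payload carries a `completion` field mirroring the non-streaming response, and the helper names are hypothetical.

```python
import json


def parse_sse_data(line: str):
    """Extract the completion text from one 'data: {...}' SSE line.

    Assumes the event payload is a JSON object with a 'completion'
    field, mirroring the non-streaming response body; returns None
    for non-data lines such as comments or keep-alives.
    """
    if not line.startswith("data:"):
        return None
    payload = json.loads(line[len("data:"):].strip())
    return payload.get("completion")


def stream_completion(api_key: str, prompt: str, model: str = "claude-2.1"):
    """Yield completion chunks from a streaming request (sketch)."""
    import requests  # third-party: pip install requests

    resp = requests.post(
        "https://platform.llmprovider.ai/v1/complete",
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "Content-Type": "application/json",
        },
        json={
            "model": model,
            "max_tokens_to_sample": 1024,
            "prompt": prompt,
            "stream": True,
        },
        stream=True,
    )
    resp.raise_for_status()
    for raw in resp.iter_lines(decode_unicode=True):
        chunk = parse_sse_data(raw) if raw else None
        if chunk is not None:
            yield chunk
```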
## Example Request

```json
{
  "model": "claude-2.1",
  "max_tokens_to_sample": 1024,
  "prompt": "\n\nHuman: Hello, Claude\n\nAssistant:"
}
```
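The prompt must follow the alternating `\n\nHuman:` / `\n\nAssistant:` turn format shown above. A small helper (hypothetical, not part of the API) can build it from a plain user message:

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the '\n\nHuman: ... \n\nAssistant:'
    turn format expected by the Completions API prompt field."""
    return f"\n\nHuman: {user_message}\n\nAssistant:"


prompt = build_prompt("Hello, Claude")
# Matches the prompt string used in the example request above.
```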
## Response Body
The response body will be a JSON object containing the completion and metadata.
| Field | Type | Description |
| --- | --- | --- |
| id | string | Unique identifier for the completion. |
| model | string | The model that handled the request. |
| completion | string | The generated completion text. |
| stop_reason | string | The reason why the completion stopped. |
| type | string | The type of completion. |
## Example Response

```json
{
  "completion": " Hello! My name is Claude.",
  "id": "compl_018CKm6gsux7P8yMcwZbeCPw",
  "model": "claude-2.1",
  "stop_reason": "stop_sequence",
  "type": "completion"
}
```
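The `stop_reason` field indicates why generation ended. A minimal sketch of using it, assuming `stop_sequence` (a stop sequence or natural end was reached) and `max_tokens` (the token limit was hit) are the values returned; the helper name is hypothetical:

```python
def is_truncated(response: dict) -> bool:
    """Return True when the completion ended because it hit the
    token limit rather than a natural stopping point."""
    return response.get("stop_reason") == "max_tokens"


resp = {
    "completion": " Hello! My name is Claude.",
    "id": "compl_018CKm6gsux7P8yMcwZbeCPw",
    "model": "claude-2.1",
    "stop_reason": "stop_sequence",
    "type": "completion",
}
# This response stopped on a stop sequence, so it is not truncated.
assert not is_truncated(resp)
```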
## Code Examples

### Shell
```bash
curl -X POST https://platform.llmprovider.ai/v1/complete \
  -H "x-api-key: $YOUR_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-2.1",
    "max_tokens_to_sample": 1024,
    "prompt": "\n\nHuman: Hello, Claude\n\nAssistant:"
  }'
```
### Node.js

```javascript
const axios = require('axios');

const apiKey = 'YOUR_API_KEY';
const url = 'https://platform.llmprovider.ai/v1/complete';

const data = {
  model: 'claude-2.1',
  max_tokens_to_sample: 1024,
  prompt: '\n\nHuman: Hello, Claude\n\nAssistant:'
};

const headers = {
  'x-api-key': apiKey,
  'anthropic-version': '2023-06-01',
  'Content-Type': 'application/json'
};

axios.post(url, data, { headers })
  .then(response => {
    console.log('Response:', response.data);
  })
  .catch(error => {
    console.error('Error:', error);
  });
```
### Python

```python
import requests

api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/complete'

headers = {
    'x-api-key': api_key,
    'anthropic-version': '2023-06-01',
    'Content-Type': 'application/json'
}
data = {
    'model': 'claude-2.1',
    'max_tokens_to_sample': 1024,
    'prompt': '\n\nHuman: Hello, Claude\n\nAssistant:'
}

# json= serializes the body and avoids a manual json.dumps call
response = requests.post(url, headers=headers, json=data)
if response.status_code == 200:
    print('Response:', response.json())
else:
    print('Error:', response.status_code, response.text)
```
For more details, refer to the Anthropic API documentation.